1. Identity statement
Reference Type: Conference Paper (Conference Proceedings)
Site: sibgrapi.sid.inpe.br
Holder Code: ibi 8JMKD3MGPEW34M/46T9EHH
Identifier: 8JMKD3MGPEW34M/3U3ETBS
Repository: sid.inpe.br/sibgrapi/2019/09.15.02.10
Last Update: 2019:09.15.02.10.34 (UTC) administrator
Metadata Repository: sid.inpe.br/sibgrapi/2019/09.15.02.10.34
Metadata Last Update: 2022:06.14.00.09.42 (UTC) administrator
DOI: 10.1109/SIBGRAPI.2019.00043
Citation Key: CardenasCernChav:2019:DySiLa
Title: Dynamic Sign Language Recognition Based on Convolutional Neural Networks and Texture Maps
Format: On-line
Year: 2019
Access Date: 2024, Apr. 28
Number of Files: 1
Size: 2493 KiB
2. Context
Author: 1. Cardenas, Edwin Jonathan Escobedo
2. Cerna, Lourdes Ramirez
3. Chavez, Guillermo Camara
Affiliation: 1. Federal University of Ouro Preto
2. National University of Ouro Preto
3. Federal University of Ouro Preto
Editor: Oliveira, Luciano Rebouças de
Sarder, Pinaki
Lage, Marcos
Sadlo, Filip
e-Mail Address: edu.escobedo88@gmail.com
Conference Name: Conference on Graphics, Patterns and Images, 32 (SIBGRAPI)
Conference Location: Rio de Janeiro, RJ, Brazil
Date: 28-31 Oct. 2019
Publisher: IEEE Computer Society
Publisher City: Los Alamitos
Book Title: Proceedings
Tertiary Type: Full Paper
History (UTC): 2019-09-15 02:10:34 :: edu.escobedo88@gmail.com -> administrator ::
2022-06-14 00:09:42 :: administrator -> edu.escobedo88@gmail.com :: 2019
3. Content and structure
Is the master or a copy?: is the master
Content Stage: completed
Transferable: 1
Version Type: finaldraft
Keywords: CNN; sign language; texture maps
Abstract: Sign language recognition (SLR) is a very challenging task due to the complexity of learning or developing descriptors to represent its primary parameters (location, movement, and hand configuration). In this paper, we propose a robust deep-learning-based method for sign language recognition. Our approach represents multimodal information (RGB-D) through texture maps to describe hand location and movement. Moreover, we introduce an intuitive method to extract a representative frame that describes the hand shape. Next, we use this information as input to two CNN models (a three-stream and a two-stream model) to learn robust features capable of recognizing a dynamic sign. We conduct our experiments on two sign language datasets, and the comparison with state-of-the-art SLR methods reveals the superiority of our approach, which optimally combines texture maps and hand shape for SLR tasks.
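The texture maps mentioned in the abstract encode hand motion over time into single 2-D images that a CNN can consume. As a rough, hypothetical illustration only (this is a generic motion-history-style accumulation, not the authors' actual texture-map construction; the function name, decay factor, and threshold are assumptions), such a map could be built from a grayscale or depth frame sequence like this:

```python
import numpy as np

def motion_texture_map(frames, tau=0.9, thresh=15):
    """Accumulate thresholded frame differences into one 2-D motion map.

    frames: sequence of shape (T, H, W), grayscale or depth values.
    tau:    decay factor applied to the running map at each step,
            so older motion fades while recent motion stays bright.
    thresh: absolute-difference threshold marking motion pixels.
    """
    frames = np.asarray(frames, dtype=np.float32)
    mhi = np.zeros(frames.shape[1:], dtype=np.float32)
    for prev, curr in zip(frames[:-1], frames[1:]):
        motion = np.abs(curr - prev) > thresh  # where the hand moved
        mhi = tau * mhi                        # fade older motion
        mhi[motion] = 1.0                      # stamp current motion
    # scale to [0, 255] so the map can be used as a CNN input channel
    return (mhi * 255).astype(np.uint8)
```

The resulting single-channel image could then serve as one input stream of a multi-stream CNN, alongside a representative hand-shape frame, in the spirit of the approach the abstract describes.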
Arrangement 1: urlib.net > SDLA > Fonds > SIBGRAPI 2019 > Dynamic Sign Language...
Arrangement 2: urlib.net > SDLA > Fonds > Full Index > Dynamic Sign Language...
doc Directory Content: access
source Directory Content: there are no files
agreement Directory Content: agreement.html 14/09/2019 23:10 1.2 KiB
4. Conditions of access and use
data URL: http://urlib.net/ibi/8JMKD3MGPEW34M/3U3ETBS
zipped data URL: http://urlib.net/zip/8JMKD3MGPEW34M/3U3ETBS
Language: en
Target File: PID111.pdf
User Group: edu.escobedo88@gmail.com
Visibility: shown
Update Permission: not transferred
5. Allied materials
Mirror Repository: sid.inpe.br/banon/2001/03.30.15.38.24
Next Higher Units: 8JMKD3MGPEW34M/3UA4FNL
8JMKD3MGPEW34M/3UA4FPS
8JMKD3MGPEW34M/4742MCS
Citing Item List: sid.inpe.br/sibgrapi/2019/10.25.18.30.33 14
Host Collection: sid.inpe.br/banon/2001/03.30.15.38
6. Notes
Empty Fields: archivingpolicy archivist area callnumber contenttype copyholder copyright creatorhistory descriptionlevel dissemination edition electronicmailaddress group isbn issn label lineage mark nextedition notes numberofvolumes orcid organization pages parameterlist parentrepositories previousedition previouslowerunit progress project readergroup readpermission resumeid rightsholder schedulinginformation secondarydate secondarykey secondarymark secondarytype serieseditor session shorttitle sponsor subject tertiarymark type url volume
7. Description control
e-Mail (login): edu.escobedo88@gmail.com